Modeling, Analysis and Optimization of the Thermal Performance of Air Conditioners
Uncertainty estimation is important for interpreting the trustworthiness of
machine learning models in many applications. This is especially critical in
the data-driven active learning setting where the goal is to achieve a certain
accuracy with minimum labeling effort. In such settings, the model learns to
select the most informative unlabeled samples for annotation based on its
estimated uncertainty. The highly uncertain predictions are assumed to be more
informative for improving model performance. In this paper, we explore
uncertainty calibration within an active learning framework for medical image
segmentation, an area where labels are often scarce. Various uncertainty
estimation methods and acquisition strategies (regions and full images) are
investigated. We observe that selecting regions to annotate instead of full
images leads to more well-calibrated models. Additionally, we experimentally
show that annotating regions can cut 50% of pixels that need to be labeled by
humans compared to annotating full images.
Comment: Presented at ICML 2020 Workshop on Uncertainty & Robustness in Deep Learning.
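One way the region-based acquisition described above could look as code (a minimal sketch: the function name, the choice of predictive entropy as the uncertainty estimate, and the use of non-overlapping square regions are illustrative assumptions, not details taken from the paper):

```python
import numpy as np

def select_uncertain_regions(prob_map, region_size, budget):
    """Rank non-overlapping square regions of a per-pixel probability map
    by mean predictive entropy and return the top-`budget` region corners.

    prob_map: (H, W, C) array of per-pixel class probabilities.
    region_size: side length of each candidate square region, in pixels.
    budget: number of regions to send to the human annotator.
    """
    H, W, _ = prob_map.shape
    # Pixel-wise predictive entropy as the uncertainty estimate.
    entropy = -np.sum(prob_map * np.log(prob_map + 1e-12), axis=-1)
    scores = []
    for y in range(0, H - region_size + 1, region_size):
        for x in range(0, W - region_size + 1, region_size):
            patch = entropy[y:y + region_size, x:x + region_size]
            scores.append((patch.mean(), (y, x)))
    # Most uncertain regions first; annotate only the top `budget`.
    scores.sort(key=lambda s: s[0], reverse=True)
    return [coord for _, coord in scores[:budget]]
```

Selecting regions rather than whole images concentrates the labeling budget on the ambiguous pixels, which is the mechanism behind the reported ~50% reduction in pixels annotated.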
On Batching Variable Size Inputs for Training End-to-End Speech Enhancement Systems
The performance of neural network-based speech enhancement systems is
primarily influenced by the model architecture, whereas training times and
computational resource utilization are primarily affected by training
parameters such as the batch size. Since noisy and reverberant speech mixtures
can have different duration, a batching strategy is required to handle variable
size inputs during training, in particular for state-of-the-art end-to-end
systems. Such strategies usually strike a compromise between zero-padding and
data randomization, and can be combined with a dynamic batch size for a more
consistent amount of data in each batch. However, the effect of these practices
on resource utilization and more importantly network performance is not well
documented. This paper is an empirical study of the effect of different
batching strategies and batch sizes on the training statistics and speech
enhancement performance of a Conv-TasNet, evaluated in both matched and
mismatched conditions. We find that using a small batch size during training
improves performance in both conditions for all batching strategies. Moreover,
using sorted or bucket batching with a dynamic batch size allows for reduced
training time and GPU memory usage while achieving similar performance compared
to random batching with a fixed batch size.
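A minimal sketch of the bucket-batching-with-dynamic-batch-size idea described above, assuming batches are capped by a total-duration budget (the function name, the `max_batch_seconds` parameter, and the bucketing details are illustrative assumptions, not the paper's implementation):

```python
import random

def bucket_batches(durations, max_batch_seconds, num_buckets=10, seed=0):
    """Group utterance indices into buckets of similar duration, then form
    batches whose size varies dynamically under a total-duration budget.

    durations: list of utterance lengths in seconds.
    Returns a shuffled list of batches (each a list of indices).
    """
    # Sort by duration so each bucket holds similarly long utterances.
    order = sorted(range(len(durations)), key=lambda i: durations[i])
    bucket_len = max(1, len(order) // num_buckets)
    buckets = [order[i:i + bucket_len]
               for i in range(0, len(order), bucket_len)]
    batches = []
    for bucket in buckets:
        batch, total = [], 0.0
        for idx in bucket:
            # Within a bucket durations are similar, so padding to the
            # longest item in the batch wastes little computation.
            if batch and total + durations[idx] > max_batch_seconds:
                batches.append(batch)
                batch, total = [], 0.0
            batch.append(idx)
            total += durations[idx]
        if batch:
            batches.append(batch)
    # Shuffle at the batch level to retain some data randomization.
    random.Random(seed).shuffle(batches)
    return batches
```

Capping each batch by total duration rather than by item count keeps the amount of audio per batch roughly constant, which is what stabilizes GPU memory usage across variable-length inputs.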
Road Roughness Estimation Using Machine Learning
Road roughness is an important indicator of the condition of road infrastructure, as
the roughness affects both the safety and ride comfort of passengers. Roads
deteriorate over time, which means the road roughness must be continuously
monitored in order to have an accurate understanding of the condition of the road
infrastructure. In this paper, we propose a machine learning pipeline for road
roughness prediction using the vertical acceleration of the car and the car
speed. We compared well-known supervised machine learning models such as linear
regression, naive Bayes, k-nearest neighbor, random forest, support vector
machine, and the multi-layer perceptron neural network. The models are trained
on an optimally selected set of features computed in the temporal and
statistical domains. The results demonstrate that machine learning methods can
accurately predict road roughness using recordings from the affordable
in-vehicle sensors installed in conventional passenger cars. Our findings
demonstrate that the technology is well suited for future pavement condition
monitoring, as it enables continuous monitoring of a wide road network
- …
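The temporal/statistical feature-extraction step described above could be sketched as follows (a minimal illustration: the function name and this particular feature set are assumptions, not the optimally selected features reported in the paper):

```python
import numpy as np

def temporal_features(accel_z, speed):
    """Compute a small set of temporal/statistical features from a window
    of vertical acceleration samples plus the average car speed.

    accel_z: sequence of vertical acceleration samples (m/s^2).
    speed: average speed over the window (km/h).
    Returns a feature vector suitable for a supervised model.
    """
    a = np.asarray(accel_z, dtype=float)
    return np.array([
        a.mean(),                   # mean vertical acceleration
        a.std(),                    # variability, sensitive to roughness
        np.abs(np.diff(a)).mean(),  # mean absolute sample-to-sample change
        np.sqrt(np.mean(a ** 2)),   # RMS acceleration
        float(speed),               # speed, which modulates road excitation
    ])
```

Feature vectors like this, computed over sliding windows of the acceleration signal, would then be fed to the compared classifiers (linear regression, naive Bayes, k-NN, random forest, SVM, MLP) to predict the roughness label.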